Convergence Analysis of Neural Networks that Solve Linear Programming Problems
Authors
Abstract
Artificial neural networks for solving different variants of linear programming problems are proposed and analyzed through the Liapunov direct method. An energy function with an exact penalty term is associated with each variant and leads to a discontinuous dynamic gradient-system model of an artificial neural network. The objective is to derive conditions that the network gains must satisfy in order to ensure convergence to the solution set of the linear programming problems. This objective is attained by representing the neural networks in a Persidskii-type form and using an associated diagonal-type Liapunov function.
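The approach described above, a gradient dynamical system driven by an energy function with an exact penalty term, can be illustrated with a minimal sketch. The LP instance, the gain value K, and the forward-Euler integration below are all illustrative assumptions, not the paper's actual model or analysis.

```python
import numpy as np

# Illustrative sketch (not the paper's exact model): a gradient "neural
# network" that minimizes an exact-penalty energy for a small LP
#   min  c^T x   s.t.  a^T x <= b,  x >= 0
# with energy  E(x) = c^T x + K*max(0, a^T x - b) + K*sum(max(0, -x)).
# The dynamics dx/dt = -subgrad E(x) are integrated with forward Euler.

c = np.array([-1.0, -2.0])   # objective: maximize x1 + 2*x2
a = np.array([1.0, 1.0])     # single inequality constraint x1 + x2 <= 1
b = 1.0
K = 10.0                     # penalty gain; for an exact penalty it must
                             # exceed the optimal dual norm (here y* = 2)

def subgrad(x):
    """A subgradient of the exact-penalty energy at x."""
    g = c.copy()
    if a @ x > b:            # inequality violated: add K * a
        g += K * a
    g -= K * (x < 0)         # negativity violated: add -K on those coords
    return g

x = np.array([0.2, 0.2])     # start inside the feasible region
h = 1e-3                     # Euler step size
for _ in range(5000):
    x = x - h * subgrad(x)

# The trajectory chatters along the constraint boundary (the dynamics
# are discontinuous) and settles near the optimum (0, 1).
print(np.round(x, 2))
```

The chattering seen in simulation is the discrete-time shadow of the sliding-mode behavior that the paper's Liapunov analysis handles rigorously; the gain condition on K mirrors the network-gain conditions the abstract refers to.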
Similar papers
Solving Linear Semi-Infinite Programming Problems Using Recurrent Neural Networks
The linear semi-infinite programming problem is an important class of optimization problems that deals with infinitely many constraints. In this paper, to solve this problem, we combine a discretization method and a neural network method. By a simple discretization of the infinite constraints, we convert the linear semi-infinite programming problem into a linear programming problem. Then, we use...
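The discretization step described in this abstract, replacing infinitely many constraints with a finite sample, can be sketched as follows. The constraint family a(t) = (1, t), b(t) = 1 + t², and the grid size are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical semi-infinite constraint family:
#   x1 + t*x2 <= 1 + t**2   for all t in [0, 1].
# A uniform grid over t turns it into finitely many LP constraint rows.
ts = np.linspace(0.0, 1.0, 21)               # discretization grid
A = np.column_stack([np.ones_like(ts), ts])  # row i is a(t_i)^T
b = 1.0 + ts**2                              # right-hand sides b(t_i)

# The finite problem  min c^T x  s.t.  A x <= b  can now be handed to
# any standard LP solver (or to a neural-network model like the ones
# discussed in these papers).
print(A.shape, b.shape)   # (21, 2) (21,)
```

A finer grid tightens the approximation to the original semi-infinite feasible set at the cost of more constraint rows.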
Full text
A Recurrent Neural Network Model for Solving Linear Semidefinite Programming
In this paper we solve a wide range of Semidefinite Programming (SDP) problems by using Recurrent Neural Networks (RNNs). SDP is an important numerical tool for analysis and synthesis in systems and control theory. First, we reformulate the problem as a linear programming problem; second, we reformulate it as a first-order system of ordinary differential equations. Then a recurrent neural network...
Full text
A Recurrent Neural Network for Solving Strictly Convex Quadratic Programming Problems
In this paper we present an improved neural network to solve strictly convex quadratic programming (QP) problems. The proposed model is derived from a piecewise equation corresponding to the optimality conditions of the convex QP problem and has lower structural complexity than other existing neural network models for solving such problems. On the theoretical side, stability and global converge...
Full text
An analysis of a class of neural networks for solving linear programming problems
A class of neural networks that solve linear programming problems is analyzed. The neural networks considered are modeled by dynamic gradient systems that are constructed using a parametric family of exact (nondifferentiable) penalty functions. It is proved that for a given linear programming problem and sufficiently large penalty parameters, any trajectory of the neural network converges in fi...
Full text
Mathematical Analysis for Neural Networks That Simulates the Penalty Methods in Nonlinear Programming
Some neural network models have been suggested to solve linear and quadratic programming problems. The Kennedy and Chua model [5] is one of those networks. In this paper, results about the convergence of the model are obtained. Another related problem is how to choose a parameter value s so that the equilibrium point of the network immediately and properly approximates the original solution. Su...
Full text